Multimodal machine translation (MMT) aims to improve translation quality by incorporating information from other modalities, such as vision. Previous MMT systems focus mainly on better access and use of visual information, and tend to validate their methods on image-related datasets. These studies face two challenges: first, they can only utilize triple data (bilingual texts paired with images), which is scarce; second, current benchmarks are relatively restricted and do not correspond to realistic scenarios. This paper therefore establishes both new methods and a new dataset for MMT. First, we propose 2/3-Triplet, a framework with two new approaches that enhance MMT by exploiting large-scale non-triple data: monolingual image-text data and parallel text-only data. Second, we construct an English-Chinese e-commercial multimodal translation dataset (including training and test sets), named EMMT, whose test set is deliberately curated so that some words are ambiguous and would be mistranslated without the help of images. Experiments show that our method is better suited to real-world scenarios and can significantly improve translation performance by using more non-triple data. In addition, our model rivals various SOTA models on conventional multimodal translation benchmarks.
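To make the data-mixing idea concrete, here is a minimal sketch, assuming a toy encoder-decoder (all names, including `TinyMMT` and the 2048-dim image features, are hypothetical, not the paper's code): the image input is optional, so triple data, monolingual image-text pairs, and parallel text-only pairs can all contribute a loss to the same model.

```python
# Minimal sketch of one MMT model consuming triple and non-triple data:
# the image encoder is optional, so parallel text-only pairs and
# monolingual image-caption pairs both yield a training loss.
import torch
import torch.nn as nn

class TinyMMT(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.img_proj = nn.Linear(2048, dim)       # hypothetical image feature size
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src_tokens, tgt_tokens, img_feat=None):
        src = self.embed(src_tokens)
        if img_feat is not None:                   # prepend image as a pseudo-token
            src = torch.cat([self.img_proj(img_feat).unsqueeze(1), src], dim=1)
        _, h = self.encoder(src)
        dec, _ = self.decoder(self.embed(tgt_tokens), h)
        return self.out(dec)

model = TinyMMT()
loss_fn = nn.CrossEntropyLoss()

def step(batch):
    # batch is a dict; batch["img"] is absent for parallel text-only data, and
    # for monolingual image-text data the source/target share one language side.
    logits = model(batch["src"], batch["tgt_in"], batch.get("img"))
    return loss_fn(logits.reshape(-1, logits.size(-1)), batch["tgt_out"].reshape(-1))
```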
Autoregressive language modeling (ALM) has been successfully used for self-supervised pre-training in natural language processing (NLP). However, this paradigm has not achieved results comparable to other self-supervised approaches in computer vision (e.g., contrastive learning, masked image modeling). In this paper, we investigate why autoregressive modeling does not work well on vision tasks. We analyze the limitations of visual autoregressive methods and propose a novel stochastic autoregressive image modeling method (SAIM), built on two simple designs. First, we employ a stochastic permutation strategy to generate an effective and robust image context, which is critical for vision tasks. Second, we create a parallel encoder-decoder training process in which the encoder serves a role similar to a standard vision transformer, focusing on learning the whole contextual information, while the decoder predicts the content of the current position, so that the encoder and decoder can reinforce each other. By introducing stochastic prediction and the parallel encoder-decoder, SAIM significantly improves the performance of autoregressive image modeling. Our method achieves the best accuracy (83.9%) on the vanilla ViT-Base model among methods using only ImageNet-1K data. Transfer performance on downstream tasks also shows that our model achieves competitive performance.
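The stochastic permutation design can be illustrated with a small sketch (my reading of the abstract, not the released SAIM code): a random permutation fixes the autoregressive factorization order over patches, and a mask restricts each patch's attention to its predecessors in that order.

```python
# Toy sketch: build a stochastic autoregressive attention mask over patches.
import torch

def permutation_mask(num_patches):
    order = torch.randperm(num_patches)       # stochastic factorization order
    rank = torch.empty_like(order)
    rank[order] = torch.arange(num_patches)   # rank[i] = position of patch i in order
    # mask[i, j] = True  <=>  patch j comes strictly before patch i in the order
    return rank.unsqueeze(1) > rank.unsqueeze(0), order

mask, order = permutation_mask(196)           # 14x14 patches of a 224x224 image
# In SAIM-style training, the encoder branch attends under this mask to learn
# contextual information, while the decoder branch predicts the content of each
# position (e.g., via a regression loss on raw patch pixels).
```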
Visual anomaly detection plays a crucial role not only in manufacturing inspection, which finds product defects during manufacturing processes, but also in maintenance inspection, which keeps equipment in optimal working condition, particularly outdoors. Due to the scarcity of defective samples, unsupervised anomaly detection has attracted great attention in recent years. However, existing datasets for unsupervised anomaly detection are biased towards manufacturing inspection and do not consider maintenance inspection, which is usually conducted in uncontrolled outdoor environments with varying camera viewpoints, messy backgrounds, and degradation of object surfaces after long-term operation. We focus on outdoor maintenance inspection and contribute a comprehensive Maintenance Inspection Anomaly Detection (MIAD) dataset, which contains more than 100K high-resolution color images covering various outdoor industrial scenarios. The dataset is generated by 3D graphics software and covers both surface and logical anomalies with pixel-precise ground truth. We conduct extensive evaluations of representative unsupervised anomaly detection algorithms, and we expect MIAD and the corresponding experimental results to inspire the research community to pursue outdoor unsupervised anomaly detection tasks and to spawn worthwhile related future work.
In RGB-D based 6D pose estimation, direct regression approaches can predict the 3D rotation and translation directly from RGB-D data, allowing quick deployment and efficient inference. However, directly regressing the absolute translation of the pose suffers from divergent object translation distributions between the training and testing datasets, which is usually caused by the diversity of object pose distributions in 3D physical space. To this end, we generalize the pinhole camera projection model to a residual-based projection model and propose the projective residual regression (Res6D) mechanism. Given a reference point for each object in an RGB-D image, Res6D not only reduces the distribution gap and shrinks the regression target to a small range by regressing the residual between the target and the reference point, but also constrains its output residual and its input to follow the projection equation between the 2D plane and 3D space. By plugging Res6D into the latest direct regression methods, we achieve state-of-the-art overall results on the Occlusion LineMOD (ADD(S): 79.7%), LineMOD (ADD(S): 99.5%), and YCB-Video (AUC of ADD(S): 95.4%) datasets.
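The residual-projection idea admits a short numeric sketch (the intrinsics and reference-point values below are illustrative, not from the paper): the network regresses only a small residual (du, dv, dz) around a per-object reference point, and the absolute translation is recovered with the pinhole equations.

```python
# Numeric sketch of residual-based projection for translation recovery.
import numpy as np

fx, fy, cx, cy = 572.4, 573.6, 325.3, 242.0   # example LineMOD-like intrinsics

def backproject(u, v, z):
    """Pinhole back-projection of pixel (u, v) at depth z to camera space."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# Reference point for the object (e.g., derived from its 2D region and depth)
u_ref, v_ref, z_ref = 320.0, 240.0, 1.00
# Network output: a small residual in the 2D plane and in depth
du, dv, dz = 4.5, -2.0, 0.03

t = backproject(u_ref + du, v_ref + dv, z_ref + dz)
print(t)  # absolute 3D translation, recovered from a small-range regression target
```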
The output formats and relevant contents of visual tasks vary greatly, making it difficult to process them within a single unified structure. One major obstacle lies in the high-dimensional outputs of object-level visual tasks. In this paper, we propose Obj2Seq, an object-centric vision framework. Obj2Seq takes objects as basic units and regards most object-level visual tasks as sequence generation problems over objects. These visual tasks can therefore be decomposed into two steps: first recognize objects of given categories, then generate a sequence for each object. The definition of the output sequence differs across tasks, and the model is supervised by matching these sequences with ground-truth targets. Obj2Seq can flexibly determine input categories to satisfy customized requirements, and can be easily extended to different visual tasks. Experimenting on MS COCO, Obj2Seq achieves 45.7% AP on object detection, 89.0% AP on multi-label classification, and 65.0% AP on human pose estimation. These results demonstrate its potential to be applied generally to different visual tasks. Code is available at https://github.com/casia-iva-lab/obj2seq.
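The two-step formulation can be summarized with a schematic sketch (interfaces and dummy values are mine, not the Obj2Seq API): step one recognizes objects for the prompted categories, step two emits one task-specific sequence per object.

```python
# Schematic sketch of the object-as-sequence decomposition.
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    category: str
    score: float

def recognize(image, prompted_categories: List[str]) -> List[DetectedObject]:
    # Step 1: class-prompted recognition. A real model scores learned object
    # queries against the prompts; here we return a dummy object per class.
    return [DetectedObject(c, 0.9) for c in prompted_categories]

def generate_sequence(image, obj: DetectedObject, task: str) -> List[float]:
    # Step 2: one sequence per object; its definition depends on the task.
    if task == "detection":
        return [0.2, 0.3, 0.4, 0.5]      # (x, y, w, h) of a bounding box
    if task == "pose":
        return [0.5, 0.5, 1.0] * 17      # (x, y, visible) for 17 COCO joints
    raise ValueError(task)

image = None  # placeholder input
for obj in recognize(image, ["person", "dog"]):
    seq = generate_sequence(image, obj, task="detection")
```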
Self-training shows great potential in semi-supervised learning. Its core idea is to use the model learned on labeled data to generate pseudo-labels for unlabeled samples, and thus to teach itself. To obtain valid supervision, current efforts typically adopt a momentum teacher for pseudo-label prediction, yet they observe the confirmation bias issue, where incorrect predictions can provide wrong supervision signals and accumulate during training. The primary cause of this drawback is that the prevailing self-training framework acts as using prior knowledge to guide the current state, since the teacher is updated only with the past student. To alleviate this problem, we propose a novel self-training strategy that allows the model to learn from the future. Concretely, at each training step, we first virtually optimize the student (i.e., cache the gradients without applying them to the model weights), then update the teacher with the virtual future student, and finally ask the teacher to produce pseudo-labels for the current student as guidance. In this way, we manage to improve the quality of pseudo-labels and thus boost performance. We also develop two variants of our future self-training (FST) framework by peeping into the future deeply (FST-D) and widely (FST-W). Taking unsupervised domain adaptive semantic segmentation and semi-supervised semantic segmentation as instances, we experimentally demonstrate the effectiveness and superiority of our approach under a wide range of settings. The code will be made publicly available.
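One training step of this look-ahead scheme might look as follows (a condensed sketch with simplified optimizer handling; the `loss` method, batch layout, and hyperparameters are assumptions): the student is updated virtually, the teacher is EMA-updated from that virtual future student, and only then are pseudo-labels produced.

```python
# Sketch of one future-self-training step (SGD look-ahead, EMA teacher).
import copy
import torch

def fst_step(student, teacher, sup_batch, unsup_batch, lr=0.01, ema=0.99):
    # 1) Virtual student update: compute gradients but keep the real weights.
    sup_loss = student.loss(*sup_batch)            # sup_batch = (inputs, targets)
    grads = torch.autograd.grad(sup_loss, list(student.parameters()))
    virtual = copy.deepcopy(student)
    with torch.no_grad():
        for p, g in zip(virtual.parameters(), grads):
            p -= lr * g                            # one SGD look-ahead step

    # 2) Teacher is EMA-updated with the *future* student, not the past one.
    with torch.no_grad():
        for t, v in zip(teacher.parameters(), virtual.parameters()):
            t.mul_(ema).add_((1 - ema) * v)

    # 3) The refreshed teacher produces pseudo-labels for the current student.
    with torch.no_grad():
        pseudo = teacher(unsup_batch).argmax(dim=1)
    total = student.loss(*sup_batch) + student.loss(unsup_batch, pseudo)
    return total  # backpropagate and apply the real update outside this function
```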
Few 6D pose estimation methods use a single backbone network to extract features from both RGB and depth images, and Uni6D is the pioneer in doing so. We find that the primary causes of the performance limitation in Uni6D are instance-outside and instance-inside noise. Due to its inherently straightforward pipeline design, Uni6D inevitably introduces instance-outside noise from background pixels within the receptive field, and it ignores instance-inside noise in the input depth data. In this work, we propose a two-step denoising method to handle these noise sources in Uni6D. In the first step, an instance segmentation network is used to crop and mask the instance, eliminating noise from non-instance regions. In the second step, a lightweight depth denoising module is proposed to calibrate the depth features before they are fed into the pose regression network. Extensive experiments show that our method, called Uni6Dv2, eliminates noise effectively and robustly, outperforming Uni6D without sacrificing much inference efficiency. It also reduces the need for annotated real data, which requires costly labeling.
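A rough sketch of the two denoising steps (module shapes and names are mine, not Uni6Dv2's): masking suppresses instance-outside noise at the input, and a lightweight residual module calibrates depth features against instance-inside noise before pose regression.

```python
# Sketch: step 1 masks out background; step 2 refines depth features.
import torch
import torch.nn as nn

class DepthDenoiser(nn.Module):
    """Lightweight residual refinement of depth features."""
    def __init__(self, ch=32):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, depth_feat):
        return depth_feat + self.refine(depth_feat)   # predicted correction

def step1_mask(rgbd, instance_mask):
    # Zero out everything outside the instance to suppress background pixels.
    return rgbd * instance_mask

rgbd = torch.randn(1, 4, 64, 64)                      # RGB-D crop of one instance
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()       # from an instance segmenter
clean_input = step1_mask(rgbd, mask)
depth_feat = torch.randn(1, 32, 16, 16)               # features from a depth branch
calibrated = DepthDenoiser()(depth_feat)              # fed to the pose regressor
```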
Policy optimization methods are among the most widely used reinforcement learning (RL) algorithms. However, the theoretical understanding of these methods remains insufficient. Even in the episodic tabular setting, the state-of-the-art regret bound for policy-based methods is only $\tilde{O}(\sqrt{S^2AH^4K})$, where $S$ is the number of states, $A$ is the number of actions, $H$ is the horizon, and $K$ is the number of episodes, leaving a gap of $\sqrt{SH}$ compared to the information-theoretic lower bound $\tilde{\Omega}(\sqrt{SAH^3K})$. To bridge this gap, we propose a novel algorithm, Reference-based Policy Optimization with Stable at Any Time guarantee (RPO-SAT), which features the property of being "stable at any time". We prove that our algorithm achieves $\tilde{O}(\sqrt{SAH^3K} + \sqrt{AH^4K})$ regret. When $S > H$, our algorithm is minimax optimal up to logarithmic factors. To the best of our knowledge, RPO-SAT is the first computationally efficient, nearly minimax optimal policy optimization algorithm for tabular RL.
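Written out side by side, the bounds quoted above are (a restatement in the abstract's own notation):

```latex
\begin{align*}
  \text{prior policy-based bound:}          &\quad \tilde{O}\!\left(\sqrt{S^2 A H^4 K}\right) \\
  \text{information-theoretic lower bound:} &\quad \tilde{\Omega}\!\left(\sqrt{S A H^3 K}\right) \\
  \text{RPO-SAT:}                           &\quad \tilde{O}\!\left(\sqrt{S A H^3 K} + \sqrt{A H^4 K}\right)
\end{align*}
```

When $S > H$, the first term of the RPO-SAT bound dominates the second, so the bound matches the lower bound up to logarithmic factors.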
Unsupervised anomaly detection and localization are crucial for practical applications in which it is impossible to collect and label sufficient anomalous data. Existing representation-based approaches extract normal image features with a deep convolutional neural network and characterize the corresponding distribution via non-parametric estimation methods; the anomaly score is then computed by measuring the distance between the feature of a test image and the estimated distribution. However, current methods cannot effectively map image features to a tractable base distribution, and they ignore the relationship between local and global features, which is important for identifying anomalies. To this end, we propose FastFlow, implemented with 2D normalizing flows, and use it as a probability distribution estimator. FastFlow can serve as a plug-in module with arbitrary deep feature extractors, such as ResNet and vision transformers, for unsupervised anomaly detection and localization. In the training phase, FastFlow learns to transform input visual features into a tractable distribution, and in the inference phase it uses the resulting likelihood to recognize anomalies. Extensive experimental results on the MVTec AD dataset show that FastFlow surpasses previous state-of-the-art methods in both accuracy and inference efficiency across various backbone networks. Our approach achieves 99.4% AUC in anomaly detection with high inference efficiency.
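A minimal sketch of the core mechanism (a single affine coupling layer; the real FastFlow stacks several steps with alternating splits): a 2D flow maps feature maps to a tractable base distribution, and the per-pixel negative log-likelihood serves as the anomaly map.

```python
# Sketch: one 2D affine coupling layer as a plug-in likelihood estimator.
import torch
import torch.nn as nn

class Coupling2d(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch // 2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))       # outputs scale and shift

    def forward(self, x):
        a, b = x.chunk(2, dim=1)                   # condition on one half
        s, t = self.net(a).chunk(2, dim=1)
        s = torch.tanh(s)                          # stabilized log-scale
        z = torch.cat([a, b * s.exp() + t], dim=1)
        logdet = s.sum(dim=1)                      # per-pixel log|det J|
        return z, logdet

flow = Coupling2d(64)
feat = torch.randn(2, 64, 28, 28)                  # features from e.g. a ResNet stage
z, logdet = flow(feat)
# Log-likelihood under a standard normal base, per spatial location;
# training maximizes it (minimizes -loglik.mean()) on normal images only.
loglik = -0.5 * (z ** 2).sum(dim=1) + logdet
anomaly_map = -loglik                              # high value = anomalous region
```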
The essence of unsupervised anomaly detection is to learn the compact distribution of normal samples and identify outliers as anomalies at test time. Meanwhile, anomalies in the real world, especially in industrial applications, are usually subtle and fine-grained within high-resolution images. To this end, we propose a new framework for unsupervised anomaly detection and localization. Our method aims to learn a dense and compact distribution from normal images via a coarse-to-fine alignment process. The coarse alignment stage standardizes the pixel positions of objects at both the image and feature levels. The fine alignment stage then densely maximizes the similarity of features among all corresponding positions within a batch. To facilitate learning with only normal images, we propose a new pretext task for the alignment stages called non-contrastive learning. Non-contrastive learning extracts robust and discriminative representations of normal images without making assumptions about anomalous samples, thus enabling our model to generalize to various anomaly scenarios. Extensive experiments on two typical industrial datasets, MVTec AD and BeanTech AD, demonstrate that our framework effectively detects various real-world defects and achieves a new state of the art in industrial unsupervised anomaly detection.
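A toy formulation of the fine-alignment objective (this is my reading of the abstract, not the paper's exact loss): features at each spatial position are pulled towards a per-position batch prototype, using only positive pairs, i.e. non-contrastively.

```python
# Sketch: dense, non-contrastive fine alignment over a batch of normal images.
import torch
import torch.nn.functional as F

def fine_alignment_loss(feats):
    # feats: (B, C, H, W) coarsely aligned features of a batch of normal images
    f = F.normalize(feats, dim=1)
    proto = f.mean(dim=0, keepdim=True).detach()   # per-position batch prototype
    # Maximize cosine similarity between every sample and the prototype,
    # densely at all spatial positions; no negative pairs are involved.
    return -(f * proto).sum(dim=1).mean()

loss = fine_alignment_loss(torch.randn(8, 64, 32, 32))
```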